
    3D Capture and 3D Contents Generation for Holographic Imaging

    The intrinsic properties of holograms make 3D holographic imaging the best candidate for a 3D display. The holographic display is an autostereoscopic display that provides highly realistic images with a unique perspective for an arbitrary number of viewers, both vertical and horizontal motion parallax, and focusing at different depths. The 3D content for this display is generated by means of digital holography. Digital holography implements the classic holographic principle as a two-step process of wavefront capture, in the form of a 2D interference pattern, and wavefront reconstruction, performed by numerically or optically applying a reference wave. The chapter follows the two main tendencies in forming 3D holographic content: direct feeding of optically recorded digital holograms to a holographic display, and computer generation of interference fringes from directional, depth and colour information about the 3D objects. The focus is on the key issues in encoding 3D information for holographic imaging, starting from the conversion of optically captured holographic data to the display data format, going through different approaches for forming the content for computer generation of holograms from coherently or incoherently captured 3D data, and finishing with methods for accelerated computing of these holograms.
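    The sketch below illustrates one common way to carry out the numerical half of the two-step process: the captured interference pattern is multiplied by a reference wave and the field is propagated with the angular spectrum method. This is an illustrative example rather than the chapter's code; the plane reference wave, wavelength, pixel pitch and propagation distance are placeholder assumptions.

```python
# Illustrative sketch (not the chapter's implementation): numerical wavefront
# reconstruction of a digital hologram via the angular spectrum method.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex field by a distance z (metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(hologram, wavelength=532e-9, pitch=8e-6, z=0.2, ref=None):
    """Two steps: captured 2D interference pattern -> reconstructed wavefront."""
    ref = np.ones_like(hologram, dtype=complex) if ref is None else ref
    field = hologram * ref            # illuminate hologram with reference wave
    return angular_spectrum_propagate(field, wavelength, pitch, z)

# Example: reconstruct the intensity at 20 cm from a synthetic hologram.
holo = np.random.rand(512, 512)       # placeholder for a captured hologram
image = np.abs(reconstruct(holo)) ** 2
```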

    Fast Numerical Reconstruction of Integral Imaging Based on a Determined Interval Mapping

    In this paper, a fast numerical reconstruction of integral imaging based on a determined interval mapping is proposed. To reduce the computation time, the proposed method employs the determined interval mapping instead of magnification. In the numerical reconstruction procedure, the elemental image array (EIA) acquired from the 3D object is displayed. The flipped elemental images (EIs) are numerically formed by the virtual pinhole array. Then, the determined interval depending on the reconstruction plane is calculated and applied to each flipped EI. These flipped EIs are shifted to match the determined interval at the reconstruction plane and superimposed. After the superimposed image is divided by the number of superpositions, the position error between the location of the shifted EI and the pixel position of the reconstruction plane is corrected by interpolation. As a result, the refocused image at the chosen reconstruction plane can be obtained rapidly. The experimental results confirm that the proposed method greatly decreases the computation time compared with the conventional method. In addition, the quality of the reconstruction by the proposed method, assessed with the structural similarity index (SSIM), is higher than that of the conventional method.
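    As a rough illustration of the shift-and-superimpose step described in the abstract, the sketch below accumulates flipped elemental images shifted by a per-plane interval and normalises by the overlap count. It is not the authors' implementation: the determined interval is passed in as a precomputed parameter, the interpolation-based error correction is omitted, and the array shapes are assumptions.

```python
# Hedged sketch of a shift-and-superimpose reconstruction at one depth plane.
import numpy as np

def reconstruct_plane(eis, interval):
    """eis: (rows, cols, h, w) stack of flipped elemental images;
    interval: pixel shift determined for the chosen reconstruction plane."""
    rows, cols, h, w = eis.shape
    out_h = h + (rows - 1) * interval
    out_w = w + (cols - 1) * interval
    acc = np.zeros((out_h, out_w))
    count = np.zeros((out_h, out_w))
    for r in range(rows):
        for c in range(cols):
            y, x = r * interval, c * interval
            acc[y:y + h, x:x + w] += eis[r, c]      # shift and superimpose
            count[y:y + h, x:x + w] += 1
    return acc / np.maximum(count, 1)               # divide by superposition count

# Example: a 10x10 EIA of 64x64-pixel elemental images, shifted by 16 pixels.
eia = np.random.rand(10, 10, 64, 64)
refocused = reconstruct_plane(eia, interval=16)
```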

    Dynamic speckle analysis with smoothed intensity-based activity maps

    Pointwise intensity-based algorithms are the most popular algorithms in dynamic laser speckle measurement of physical or biological activity. The output of this measurement is a two-dimensional map that qualitatively separates regions of higher and lower activity. In this paper, we propose filtering of the activity maps to enhance visualization and to enable quantitative determination of activity time scales. As a first step, we show that the severe spatial fluctuations within the map resemble signal-dependent noise. As a second step, we illustrate the proposed idea by applying filters to non-normalized and normalized activity estimates derived from synthetic and experimental data. The statistical behavior of the estimates is analyzed to choose the filter parameters, and substantial narrowing of the probability density functions of the estimates is achieved after filtering. The filtered maps exhibit improved contrast and allow for a quantitative description of activity.
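    A minimal sketch of the general idea follows, assuming a mean absolute temporal difference as the pointwise activity estimator and a uniform spatial filter as the smoother; both choices, as well as the filter size, are assumptions made for the example and not the estimators used in the paper.

```python
# Illustrative sketch: pointwise intensity-based activity map, then smoothing.
import numpy as np
from scipy.ndimage import uniform_filter

def activity_map(stack):
    """stack: (T, H, W) sequence of speckle intensity frames."""
    diffs = np.abs(np.diff(stack.astype(float), axis=0))
    return diffs.mean(axis=0)               # raw, spatially fluctuating map

def smoothed_activity_map(stack, size=7):
    raw = activity_map(stack)
    return uniform_filter(raw, size=size)   # suppress signal-dependent noise

# Example: 200 frames of 256x256 speckle images.
frames = np.random.rand(200, 256, 256)
smooth_map = smoothed_activity_map(frames, size=7)
```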

    Optical reconstruction of transparent objects with phase-only SLMs

    Three approaches for visualization of transparent micro-objects from holographic data using phase-only SLMs are described. The objects are silicon micro-lenses captured in the near infrared by means of digital holographic microscopy and a simulated weakly refracting 3D object with a size in the micrometer range. In the first method, profilometric/tomographic data are retrieved from the captured holograms and converted into a 3D point cloud, which allows for computer generation of multi-view phase holograms using the Rayleigh-Sommerfeld formulation. In the second method, the microlens is computationally placed in front of a textured object to simulate the image of the textured data as seen through the lens. In the third method, direct optical reconstruction of the micrometer-scale object through a digital lens is achieved by modifying the phase with the Gerchberg-Saxton algorithm. (C) 2013 Optical Society of America.
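    The third method relies on Gerchberg-Saxton phase retrieval; a minimal sketch of that iteration is given below, assuming a simple Fourier-transform relation between the SLM plane and the reconstruction plane. The target pattern, iteration count and random initial phase are illustrative assumptions and do not reproduce the paper's optical geometry.

```python
# Hedged sketch of the Gerchberg-Saxton iteration for a phase-only SLM.
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50):
    """Return a phase-only SLM pattern whose far field approximates
    target_amplitude (2D non-negative array)."""
    phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)  # random start
    for _ in range(iterations):
        # Image plane: impose the target amplitude, keep the current phase.
        image_field = target_amplitude * np.exp(
            1j * np.angle(np.fft.fft2(np.exp(1j * phase))))
        # SLM plane: keep only the phase (phase-only modulation constraint).
        phase = np.angle(np.fft.ifft2(image_field))
    return phase

# Example: compute a phase hologram for a 256x256 square target pattern.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
slm_phase = gerchberg_saxton(np.sqrt(target), iterations=100)
```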